59 research outputs found

    Radioactive elements in coal and their possible impacts


    Design and implementation of a module for analyzing the publication activity of a university

    The work is aimed at creating a software solution that automates the statistical analysis of the publication activity of a scientific and educational organization. The result of the work is a module that provides information support for the management of research activities, including an assessment of the relevance of ongoing research projects.
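    As an illustration only, here is a minimal Python sketch of the kind of aggregation such a module might perform; the record format with `year` and `department` fields is hypothetical, not taken from the paper:

```python
from collections import Counter

# Hypothetical record format; a real module would ingest exports from
# Scopus, Web of Science, or an institutional repository.
publications = [
    {"year": 2021, "department": "CS"},
    {"year": 2022, "department": "CS"},
    {"year": 2022, "department": "Physics"},
]

def publication_activity(records):
    """Count publications per year and per department -- the kind of
    statistic a research-management module would report."""
    by_year = Counter(r["year"] for r in records)
    by_dept = Counter(r["department"] for r in records)
    return by_year, by_dept

print(publication_activity(publications))
```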

    Object-Centric Learning with Slot Attention

    Learning object-centric representations of complex scenes is a promising step towards enabling efficient abstract reasoning from low-level perceptual features. Yet, most deep learning approaches learn distributed representations that do not capture the compositional properties of natural scenes. In this paper, we present the Slot Attention module, an architectural component that interfaces with perceptual representations such as the output of a convolutional neural network and produces a set of task-dependent abstract representations which we call slots. These slots are exchangeable and can bind to any object in the input by specializing through a competitive procedure over multiple rounds of attention. We empirically demonstrate that Slot Attention can extract object-centric representations that enable generalization to unseen compositions when trained on unsupervised object discovery and supervised property prediction tasks.
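    The mechanism is described precisely enough to sketch. Below is a minimal NumPy illustration of the competitive attention step, with random matrices standing in for the learned slot initialization and projections, and with the paper's GRU/MLP slot update omitted; it is a sketch of the idea, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(x, axis):
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def slot_attention(inputs, num_slots=4, num_iters=3):
    """Iterative attention in which slots compete for input features:
    the softmax is taken over the slot axis, not the input axis."""
    n, d = inputs.shape
    # Random stand-ins for learned parameters (slot init, q/k/v projections).
    slots = rng.normal(size=(num_slots, d))
    w_q, w_k, w_v = (rng.normal(size=(d, d)) / np.sqrt(d) for _ in range(3))
    k, v = inputs @ w_k, inputs @ w_v
    for _ in range(num_iters):
        q = slots @ w_q
        attn = softmax(q @ k.T / np.sqrt(d), axis=0)   # normalize over slots: competition
        attn = attn / attn.sum(axis=1, keepdims=True)  # weighted mean over inputs
        slots = attn @ v  # the paper refines slots with a GRU + MLP here
    return slots

features = rng.normal(size=(49, 16))   # e.g. a flattened CNN feature map
print(slot_attention(features).shape)  # (4, 16)
```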

    Video OWL-ViT: Temporally-consistent open-world localization in video

    We present an architecture and a training recipe that adapts pre-trained open-world image models to localization in videos. Understanding the open visual world (without being constrained by fixed label spaces) is crucial for many real-world vision tasks. Contrastive pre-training on large image-text datasets has recently led to significant improvements for image-level tasks. For more structured tasks involving object localization, applying pre-trained models is more challenging. This is particularly true for video tasks, where task-specific data is limited. We show successful transfer of open-world models by building on the OWL-ViT open-vocabulary detection model and adapting it to video by adding a transformer decoder. The decoder propagates object representations recurrently through time by using the output tokens for one frame as the object queries for the next. Our model is end-to-end trainable on video data and enjoys improved temporal consistency compared to tracking-by-detection baselines, while retaining the open-world capabilities of the backbone detector. We evaluate our model on the challenging TAO-OW benchmark and demonstrate that open-world capabilities, learned from large-scale image-text pre-training, can be transferred successfully to open-world localization across diverse videos. Comment: ICCV 2023
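    The recurrence described above (the output tokens for frame t become the object queries for frame t+1) can be sketched in a few lines. `image_encoder` and `decoder` below are toy stand-ins for the pre-trained OWL-ViT backbone and the added transformer decoder, not the paper's actual API:

```python
import numpy as np

rng = np.random.default_rng(0)
DIM = 32

def image_encoder(frame):
    # Stand-in for the pre-trained open-vocabulary image backbone.
    return rng.normal(size=(49, DIM))

def decoder(queries, features):
    # Stand-in for the added transformer decoder: one cross-attention step.
    attn = queries @ features.T / np.sqrt(DIM)
    attn = np.exp(attn - attn.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)
    return attn @ features

def track_video(frames, num_queries=8):
    """Propagate object representations recurrently through time: the
    decoder's outputs for one frame seed the queries for the next."""
    queries = rng.normal(size=(num_queries, DIM))  # learned init in the real model
    outputs = []
    for frame in frames:
        tokens = decoder(queries, image_encoder(frame))
        outputs.append(tokens)  # boxes and labels would be decoded from these
        queries = tokens        # the recurrence that gives temporal consistency
    return outputs

tracks = track_video(frames=[None] * 4)  # dummy frames
print(len(tracks), tracks[0].shape)      # 4 (8, 32)
```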